FTT-GRU: A Hybrid Fast Temporal Transformer with GRU for Remaining Useful Life Prediction
Chirukiri, Varun Teja, Cheerala, Udaya Bhasker, Kanta, Sandeep, Karim, Abdul, Damacharla, Praveen
Accurate prediction of the remaining useful life (RUL) of industrial machinery is essential for reducing downtime and optimizing maintenance schedules. Existing approaches, such as long short-term memory (LSTM) networks and convolutional neural networks (CNNs), often struggle to model both global temporal dependencies and fine-grained degradation trends in multivariate sensor data. We propose a hybrid model, FTT-GRU, which combines a Fast Temporal Transformer (FTT) -- a lightweight Transformer variant using linearized attention via fast Fourier transform (FFT) -- with a gated recurrent unit (GRU) layer for sequential modeling. To the best of our knowledge, this is the first application of an FTT with a GRU for RUL prediction on NASA CMAPSS, enabling simultaneous capture of global and local degradation patterns in a compact architecture. On CMAPSS FD001, FTT-GRU attains RMSE 30.76, MAE 18.97, and $R^2=0.45$, with 1.12 ms CPU latency at batch=1. Relative to the best published deep baseline (TCN--Attention), it improves RMSE by 1.16\% and MAE by 4.00\%. Training curves averaged over $k=3$ runs show smooth convergence with narrow 95\% confidence bands, and ablations (GRU-only, FTT-only) support the contribution of both components. These results demonstrate that a compact Transformer-RNN hybrid delivers accurate and efficient RUL predictions on CMAPSS, making it suitable for real-time industrial prognostics.
- North America > United States > Texas > Montgomery County > The Woodlands (0.04)
- North America > United States > Texas > Dallas County > Dallas (0.04)
- North America > United States > Oregon > Multnomah County > Portland (0.04)
- (2 more...)
- Government > Space Agency (0.50)
- Government > Regional Government > North America Government > United States Government (0.50)
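The pipeline the abstract describes (FFT-based token mixing as a lightweight stand-in for attention, followed by a GRU and a regression head) can be sketched as follows. This is a minimal illustration with randomly initialized weights, not the authors' implementation; the `fft_mixing` step follows the FNet-style real-FFT trick, and all function names, shapes, and the initialization scale are assumptions.

```python
import numpy as np

def fft_mixing(x):
    # FNet-style token mixing: the real part of a 2D FFT over the
    # (time, feature) axes stands in for linearized global attention
    return np.real(np.fft.fft2(x))

def gru_step(x_t, h, Wz, Uz, Wr, Ur, Wh, Uh):
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    z = sigmoid(x_t @ Wz + h @ Uz)            # update gate
    r = sigmoid(x_t @ Wr + h @ Ur)            # reset gate
    h_new = np.tanh(x_t @ Wh + (r * h) @ Uh)  # candidate state
    return (1 - z) * h + z * h_new

def ftt_gru_rul(window, hidden=8, seed=0):
    # window: (time_steps, sensors); returns a scalar RUL estimate
    rng = np.random.default_rng(seed)
    d = window.shape[1]
    shapes = [(d, hidden), (hidden, hidden)] * 3
    params = [0.1 * rng.standard_normal(s) for s in shapes]
    w_out = 0.1 * rng.standard_normal(hidden)
    mixed = fft_mixing(window)                # global mixing pass
    h = np.zeros(hidden)
    for t in range(mixed.shape[0]):           # sequential GRU pass
        h = gru_step(mixed[t], h, *params)
    return float(h @ w_out)                   # linear regression head
```

The point of the hybrid is visible in the two passes: the FFT step mixes every time step with every other at once, while the GRU loop accumulates local degradation order step by step.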
Disaster Informatics after the COVID-19 Pandemic: Bibliometric and Topic Analysis based on Large-scale Academic Literature
Tran, Ngan, Chen, Haihua, Cleveland, Ana, Zhou, Yuhan
This study presents a comprehensive bibliometric and topic analysis of the disaster informatics literature published between January 2020 and September 2022. Leveraging a large-scale corpus and advanced techniques such as pre-trained language models and generative AI, we identify the most active countries, institutions, authors, collaboration networks, emergent topics, patterns among the most significant topics, and shifts in research priorities spurred by the COVID-19 pandemic. Our findings highlight that (1) the countries most impacted by the COVID-19 pandemic were also among the most active, with each country having specific research interests, (2) countries and institutions within the same region, or that share a common language, tend to collaborate, (3) top active authors tend to form close partnerships with one or two key partners, (4) authors typically specialized in one or two specific topics, while institutions had more diverse interests across several topics, and (5) the COVID-19 pandemic has influenced research priorities in disaster informatics, placing greater emphasis on public health. We further demonstrate that the field is converging on multidimensional resilience strategies and cross-sectoral data-sharing collaborations or projects, reflecting a heightened awareness of global vulnerability and interdependency. Our data collection and quality-assurance strategies, data analytic practices, LLM-based topic extraction and summarization approaches, and result visualization tools can be applied to comparable datasets or to similar analytic problems. By mapping out the trends in disaster informatics, our analysis offers strategic insights for policymakers, practitioners, and scholars aiming to enhance disaster informatics capacities in an increasingly uncertain and complex risk landscape.
- North America > Canada > Ontario > Toronto (0.14)
- North America > United States > Texas > Coleman County (0.14)
- Europe > United Kingdom > England > Nottinghamshire > Nottingham (0.14)
- (70 more...)
- Health & Medicine > Therapeutic Area > Infections and Infectious Diseases (1.00)
- Health & Medicine > Therapeutic Area > Immunology (1.00)
- Health & Medicine > Epidemiology (1.00)
Artificial Intelligence for Pediatric Height Prediction Using Large-Scale Longitudinal Body Composition Data
Chun, Dohyun, Jung, Hae Woon, Kang, Jongho, Jang, Woo Young, Kim, Jihun
Height growth serves as a key health indicator, reflecting the interplay of genetic, environmental, and socioeconomic factors (Norris et al., 2022; Baxter-Jones et al., 2011; Hargreaves et al., 2022). Monitoring height growth enables early detection of disorders, facilitating timely interventions (Saari et al., 2015; Craig et al., 2011; Grote et al., 2008; Zhang et al., 2016). Accurate future height prediction is essential for diagnosing growth disorders, initiating hormone therapy, and evaluating treatment efficacy (Collett-Solberg et al., 2019; Ostojic, 2013; Cuttler & Silvers, 2004). Traditional height prediction methods rely on skeletal maturity assessment using hand-wrist radiographs. These include the Bayley-Pinneau (Bayley and Pinneau, 1952), Tanner-Whitehouse (Tanner et al., 1975), and Roche-Wainer-Thissen (Roche et al., 1975) methods. However, these approaches have limitations including radiation exposure, the need for specialized expertise, and high interobserver variability (Bull et al., 1999; Chávez-Vázquez et al., 2024; Prokop-Piotrkowska et al., 2021).
- Asia > South Korea > Seoul > Seoul (0.04)
- North America > United States > New York (0.04)
- North America > United States > District of Columbia > Washington (0.04)
- (3 more...)
- Research Report > Strength High (1.00)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Health & Medicine > Therapeutic Area > Pediatrics/Neonatology (0.66)
- Health & Medicine > Diagnostic Medicine > Imaging (0.48)
Detecting LLM-Generated Korean Text through Linguistic Feature Analysis
Park, Shinwoo, Kim, Shubin, Kim, Do-Kyung, Han, Yo-Sub
The rapid advancement of large language models (LLMs) increases the difficulty of distinguishing between human-written and LLM-generated text. Detecting LLM-generated text is crucial for upholding academic integrity, preventing plagiarism, protecting copyrights, and ensuring ethical research practices. Most prior studies on detecting LLM-generated text focus primarily on English text. However, languages with distinct morphological and syntactic characteristics require specialized detection approaches. Their unique structures and usage patterns can hinder the direct application of methods primarily designed for English. Among such languages, we focus on Korean, which has relatively flexible spacing rules, a rich morphological system, and less frequent comma usage compared to English. We introduce KatFish, the first benchmark dataset for detecting LLM-generated Korean text. The dataset consists of text written by humans and generated by four LLMs across three genres. By examining spacing patterns, part-of-speech diversity, and comma usage, we illuminate the linguistic differences between human-written and LLM-generated Korean text. Building on these observations, we propose KatFishNet, a detection method specifically designed for the Korean language. KatFishNet achieves an average of 19.78% higher AUROC compared to the best-performing existing detection method. Our code and data are available at https://github.com/Shinwoo-Park/detecting_llm_generated_korean_text_through_linguistic_analysis.
- Asia > South Korea > Seoul > Seoul (0.04)
- Asia > South Korea > Gangwon-do > Chuncheon (0.04)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.69)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.46)
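A toy version of the surface cues the paper examines (spacing density, comma usage, and a diversity score as a crude stand-in for real part-of-speech tagging) could be computed like this; the function and feature names are hypothetical, not KatFishNet's actual features.

```python
def stylistic_features(text):
    # Spacing, comma, and diversity cues for Korean text.
    # type_token_ratio is a rough lexical-diversity proxy; the paper
    # uses proper part-of-speech analysis, which needs a morphological
    # analyzer and is out of scope for this sketch.
    tokens = text.split()
    n_chars = max(len(text), 1)
    n_tokens = max(len(tokens), 1)
    return {
        "space_ratio": text.count(" ") / n_chars,        # spacing density
        "commas_per_token": text.count(",") / n_tokens,  # comma usage
        "type_token_ratio": len(set(tokens)) / n_tokens, # diversity proxy
    }
```

Feature vectors like these could then feed any standard classifier to separate human-written from LLM-generated text.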
Transparent Networks for Multivariate Time Series
Kim, Minkyu, Lee, Suan, Kim, Jinho
Transparent models, which are machine learning models that produce inherently interpretable predictions, are receiving significant attention in high-stakes domains. However, despite much real-world data being collected as time series, there is a lack of studies on transparent time series models. To address this gap, we propose a novel transparent neural network model for time series called Generalized Additive Time Series Model (GATSM). GATSM consists of two parts: 1) independent feature networks to learn feature representations, and 2) a transparent temporal module to learn temporal patterns across different time steps using the feature representations. This structure allows GATSM to effectively capture temporal patterns and handle dynamic-length time series while preserving transparency. Empirical experiments show that GATSM significantly outperforms existing generalized additive models and achieves comparable performance to black-box time series models, such as recurrent neural networks and Transformers. In addition, we demonstrate that GATSM finds interesting patterns in time series.
- Oceania > Australia (0.04)
- Asia > South Korea > Gangwon-do > Chuncheon (0.04)
- Asia > China > Beijing > Beijing (0.04)
- North America > United States > California (0.04)
- Health & Medicine > Therapeutic Area > Oncology (1.00)
- Health & Medicine > Therapeutic Area > Pulmonary/Respiratory Diseases (0.67)
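The two-part structure described in the abstract (independent per-feature networks feeding a transparent temporal module) might look roughly like this additive sketch. The tanh shape-function form, the softmax time weighting, and all names are assumptions for illustration, not the paper's code; the key property shown is that every (time step, feature) contribution is a plain number a user can inspect.

```python
import numpy as np

def gatsm_predict(series, w1, w2, time_weights):
    # series: (T, d). Each feature j gets its own tiny shape function
    # f_j(x) = tanh(x * w1[j]) @ w2[j]  (an independent feature network),
    # and a softmax over time_weights acts as a transparent temporal
    # module, so the model output is a weighted sum of inspectable terms.
    T, d = series.shape
    contrib = np.empty((T, d))
    for j in range(d):
        for t in range(T):
            contrib[t, j] = np.tanh(series[t, j] * w1[j]) @ w2[j]
    w = np.exp(time_weights[:T])
    w /= w.sum()                      # softmax over observed time steps
    return float((w[:, None] * contrib).sum()), contrib
```

Because the prediction is a sum, attributing it back to individual features and time steps requires no post-hoc explanation method: the `contrib` matrix is the explanation.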
KULTURE Bench: A Benchmark for Assessing Language Model in Korean Cultural Context
Wang, Xiaonan, Yeo, Jinyoung, Lim, Joon-Ho, Kim, Hansaem
Large language models have exhibited significant enhancements in performance across various tasks. However, the complexity of their evaluation increases as these models generate more fluent and coherent content. Current multilingual benchmarks often use translated English versions, which may incorporate Western cultural biases that do not accurately assess other languages and cultures. To address this research gap, we introduce KULTURE Bench, an evaluation framework specifically designed for Korean culture that features datasets of cultural news, idioms, and poetry. It is designed to assess language models' cultural comprehension and reasoning capabilities at the word, sentence, and paragraph levels. Using the KULTURE Bench, we assessed the capabilities of models trained with different language corpora and analyzed the results comprehensively. The results show that there is still significant room for improvement in the models' understanding of texts related to the deeper aspects of Korean culture.
- Asia > China (0.29)
- Asia > South Korea > Seoul > Seoul (0.04)
- Asia > North Korea > Hamgyong-namdo > Hamhung (0.04)
- (8 more...)
Quantum Multi-Agent Reinforcement Learning for Cooperative Mobile Access in Space-Air-Ground Integrated Networks
Kim, Gyu Seon, Cho, Yeryeong, Chung, Jaehyun, Park, Soohyun, Jung, Soyi, Han, Zhu, Kim, Joongheon
Achieving global space-air-ground integrated network (SAGIN) access only with CubeSats presents significant challenges, such as access sustainability limitations in specific regions (e.g., polar regions) and energy efficiency limitations in CubeSats. To tackle these problems, high-altitude long-endurance unmanned aerial vehicles (HALE-UAVs) can complement these CubeSat shortcomings to cooperatively provide global access sustainability and energy efficiency. However, as the number of CubeSats and HALE-UAVs increases, the scheduling dimension of each ground station (GS) increases. As a result, each GS can fall into the curse of dimensionality, and this challenge becomes a major hurdle for efficient global access. Therefore, this paper provides a quantum multi-agent reinforcement learning (QMARL)-based method for scheduling between GSs and CubeSats/HALE-UAVs in order to improve global access availability and energy efficiency. The main reason the QMARL-based scheduler can be beneficial is that the algorithm facilitates a logarithmic-scale reduction in scheduling action dimensions, a critical feature as the number of CubeSats and HALE-UAVs expands. Additionally, individual GSs have different traffic demands depending on their locations and characteristics, so it is essential to provide differentiated access services. The superiority of the proposed scheduler is validated through data-intensive experiments in realistic CubeSat/HALE-UAV settings.
- North America > United States > Texas > Harris County > Houston (0.28)
- North America > United States > California > Orange County > Irvine (0.14)
- North America > United States > California > Los Angeles County > Los Angeles (0.14)
- (12 more...)
- Personal (0.46)
- Research Report (0.40)
- Energy (1.00)
- Aerospace & Defense > Aircraft (1.00)
- Education (0.93)
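The claimed logarithmic-scale reduction follows from basic qubit counting: a register of q qubits spans 2^q basis states, so an agent needs only ceil(log2 A) measured qubits to index A scheduling actions. A minimal sketch of that counting argument (hypothetical helper names, not the authors' quantum circuit):

```python
import math

def qubits_for_actions(num_actions):
    # a register of q qubits spans 2**q basis states, so the number of
    # qubits an agent must measure grows only logarithmically with the
    # size of its scheduling action space
    return math.ceil(math.log2(max(num_actions, 2)))

def action_from_bits(bits):
    # decode a measured bitstring, e.g. [1, 0, 1, 1], into an action index
    return int("".join(map(str, bits)), 2)
```

For example, a GS choosing among 1,000 CubeSat/HALE-UAV schedule combinations needs a 10-qubit register rather than a 1,000-way output head, which is the dimensionality relief the abstract refers to.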
Explicit Feature Interaction-aware Graph Neural Networks
Kim, Minkyu, Choi, Hyun-Soo, Kim, Jinho
Graph neural networks (GNNs) are powerful tools for handling graph-structured data. However, their design often limits them to learning only higher-order feature interactions, leaving low-order feature interactions overlooked. To address this problem, we introduce a novel GNN method called explicit feature interaction-aware graph neural network (EFI-GNN). Unlike conventional GNNs, EFI-GNN is a multilayer linear network designed to model arbitrary-order feature interactions explicitly within graphs. To validate the efficacy of EFI-GNN, we conduct experiments using various datasets. The experimental results demonstrate that EFI-GNN achieves performance competitive with existing GNNs, and that jointly training a GNN with EFI-GNN improves predictive performance. Furthermore, the predictions made by EFI-GNN are interpretable, owing to its linear construction. The source code of EFI-GNN is available at https://github.com/gim4855744/EFI-GNN
- North America > United States > Michigan > Washtenaw County > Ann Arbor (0.14)
- Asia > South Korea > Gangwon-do > Chuncheon (0.05)
- Asia > South Korea > Seoul > Seoul (0.05)
- (3 more...)
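One way to picture an explicitly interaction-aware linear graph layer is a cross-network-style update, where each layer multiplies the aggregated representation elementwise by the original features, raising the explicit interaction order by one per layer. This is an illustrative guess at the mechanism inspired by Deep & Cross Networks and GCN normalization, not the released EFI-GNN code:

```python
import numpy as np

def normalize_adjacency(A):
    # symmetric normalization with self-loops, as commonly used in GCNs
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def cross_graph_layer(H, X0, A_norm, W):
    # aggregate neighbors linearly, then multiply elementwise by the
    # original node features X0: after L layers the output contains
    # explicit interaction terms of the inputs up to order L + 1,
    # with a residual connection preserving the lower orders
    return X0 * (A_norm @ H @ W) + H
```

Because every operation here is linear or an elementwise product with the raw inputs, each output coordinate expands into a readable polynomial of input features, which is the kind of interpretability the abstract attributes to a linear construction.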
Mol-AIR: Molecular Reinforcement Learning with Adaptive Intrinsic Rewards for Goal-directed Molecular Generation
Park, Jinyeong, Ahn, Jaegyoon, Choi, Jonghwan, Kim, Jibum
Optimizing techniques for discovering molecular structures with desired properties is crucial in artificial intelligence (AI)-based drug discovery. Combining deep generative models with reinforcement learning has emerged as an effective strategy for generating molecules with specific properties. Despite its potential, this approach is ineffective in exploring the vast chemical space and optimizing particular chemical properties. To overcome these limitations, we present Mol-AIR, a reinforcement learning-based framework using adaptive intrinsic rewards for effective goal-directed molecular generation. Mol-AIR leverages the strengths of both history-based and learning-based intrinsic rewards by exploiting random network distillation and counting-based strategies. In benchmark tests, Mol-AIR demonstrates superior performance over existing approaches in generating molecules with desired properties without any prior knowledge, including penalized LogP, QED, and celecoxib similarity. We believe that Mol-AIR represents a significant advancement in drug discovery, offering a more efficient path to discovering novel therapeutics.
- Asia > South Korea > Incheon > Incheon (0.04)
- Asia > South Korea > Gangwon-do > Chuncheon (0.04)
- Asia > Middle East > Jordan (0.04)
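The adaptive intrinsic reward described above (a history-based count bonus combined with a learning-based novelty signal) can be sketched as a simple mixture. The class name, the `beta` mixing coefficient, and treating the random-network-distillation prediction error as a plain input number are all assumptions; in the real framework the novelty term comes from a trained predictor network.

```python
import math
from collections import Counter

class AdaptiveIntrinsicReward:
    """Mix a count-based bonus with a learning-based novelty signal."""

    def __init__(self, beta=0.5):
        self.visits = Counter()  # history-based: per-state visit counts
        self.beta = beta         # balance between the two signals

    def __call__(self, state, novelty):
        # state could be e.g. a SMILES string; novelty would be the
        # RND prediction error in the real setup (stubbed as a float)
        self.visits[state] += 1
        count_bonus = 1.0 / math.sqrt(self.visits[state])
        return self.beta * count_bonus + (1.0 - self.beta) * novelty
```

The count bonus decays as a molecule (or scaffold) is revisited, while the novelty term keeps rewarding states the learned predictor still finds surprising, which is the complementary-exploration idea the abstract describes.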
A Korean Legal Judgment Prediction Dataset for Insurance Disputes
Kwak, Alice Saebom, Jeong, Cheonkam, Lim, Ji Weon, Min, Byeongcheol
This paper introduces a Korean legal judgment prediction (LJP) dataset for insurance disputes. Successful LJP models for insurance disputes can benefit insurance companies and their customers: they can save both sides time and money by allowing them to predict the likely outcome before proceeding to the dispute mediation process. As is often the case with low-resource languages, there is a limit on the amount of data available for this specific task. To mitigate this issue, we investigate how one can achieve good performance despite the limited data. In our experiments, we demonstrate that Sentence Transformer Fine-tuning (SetFit; Tunstall et al., 2022) is a good alternative to standard fine-tuning when training data are limited. The models fine-tuned with the SetFit approach on our data show performance similar to the Korean LJP benchmark models (Hwang et al., 2022) despite the much smaller data size.
- North America > United States > Arizona (0.05)
- Asia > South Korea > Gangwon-do > Chuncheon (0.04)
- Asia > China (0.04)
- Law (1.00)
- Banking & Finance > Insurance (1.00)
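SetFit's few-shot recipe (embed texts with a sentence transformer, then fit a lightweight classification head) can be illustrated without any dependencies by swapping in a hashed bag-of-words encoder and a nearest-centroid head. Everything below, including the Korean example phrases, is a hypothetical stand-in for illustration; SetFit itself additionally fine-tunes the encoder contrastively, which this sketch omits.

```python
import hashlib
from collections import defaultdict

DIM = 16

def embed(text):
    # stand-in for a sentence-transformer encoder: a hashed bag of words
    vec = [0.0] * DIM
    for tok in text.split():
        h = int(hashlib.md5(tok.encode("utf-8")).hexdigest(), 16)
        vec[h % DIM] += 1.0
    return vec

def fit_centroids(examples):
    # examples: list of (text, label); average each class's embeddings
    # to form a per-class prototype (the lightweight "head")
    sums = defaultdict(lambda: [0.0] * DIM)
    counts = defaultdict(int)
    for text, label in examples:
        counts[label] += 1
        for i, v in enumerate(embed(text)):
            sums[label][i] += v
    return {lab: [v / counts[lab] for v in s] for lab, s in sums.items()}

def classify(centroids, text):
    # assign the label of the nearest class prototype
    e = embed(text)
    def sq_dist(c):
        return sum((a - b) ** 2 for a, b in zip(e, c))
    return min(centroids, key=lambda lab: sq_dist(centroids[lab]))
```

The design point this mirrors is that, once texts are embedded well, a very small head trained on a handful of labeled disputes can already be competitive, which is why the approach suits low-resource settings like this dataset.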